© Edward Stull 2018
Edward Stull, UX Fundamentals for Non-UX Professionals, https://doi.org/10.1007/978-1-4842-3811-0_42

42. Heuristic Review

Edward Stull, Upper Arlington, Ohio, USA

Every few months I cook for a group of friends. I like to cook, but my dinners are more a series of ill-timed surprises than cohesive meals. I serve food at random intervals. A main dish may be served in 20 minutes; salad, 30 minutes later; bread, after everyone finishes eating; side dishes, abandoned in the oven for hours, never make an appearance. However, with enough wine, everything generally works out in the end.

At any time during my dinner, an experienced cook could point out what needs fixing: a pinch of salt here, turn the heat up there, put out the fire in the oven, and so on. This type of review is a heuristic: an analysis and grading of individual parts. You can apply a heuristic to nearly anything, from the preparation of a meal to the user experience of software.

Some meals are better than others. Rather than say, “This dinner is terrible,” a dispassionate reviewer might give it a passing or failing grade, “Dinner: Fail.” Likewise, rather than say, “The chicken paprikash tastes like burnt Saran Wrap,” it would be more precise to rate it as “Main Dish: Severity 5.”

Applications, especially large ones, can be overwhelming to analyze in their entirety. However, large or small, applications are made of parts. We can evaluate each part and determine if it is acceptable, like a food critic evaluating each of a meal’s courses.

Heuristic scoring uses pass/fail grades, 0–5 ranges, 0–100 percentages, or any combination of numeric or Boolean values.

We can describe the pass/fail acceptability of a home page as follows:
  • Home page (Pass)

Breaking down the home page into individual parts further clarifies our review:
  • Search field (Pass)

  • Hero image (Fail)

  • Body copy (Pass)

Grading parts on a number scale (see Figure 42-1) is a more exacting approach, allowing us to compute an average score. Whereas 0 is severely problematic, 5 is marvelous:
  • Search field (5)

  • Hero image (2)

  • Body copy (4)

  • --------------

  • Home page (3.7) = The average score of the page

Of note: a heuristic’s average is the sum of all scores divided by the number of scores: (5 + 2 + 4) / 3 ≈ 3.7.
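The averaging described above can be sketched in a few lines of Python; the part names and scores come from the example, and the dictionary structure is just one illustrative way to hold them:

```python
# Scores for each home page part, on a 0-5 scale (0 = severely problematic, 5 = marvelous).
scores = {"search_field": 5, "hero_image": 2, "body_copy": 4}

# A heuristic's average is the sum of all scores divided by the number of scores.
average = sum(scores.values()) / len(scores)
print(round(average, 1))  # (5 + 2 + 4) / 3 -> 3.7
```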

Figure 42-1.

Assigning a numerical score to screen elements

In 1990, noted usability experts Jakob Nielsen and Rolf Molich created a robust set of usability heuristics that are still in use today. The Nielsen heuristics cover everything from aesthetics to error prevention. Read more about “10 Usability Heuristics for User Interface Design” at https://goo.gl/zoQKAN . Researchers use several other frameworks as well, such as Gerhardt-Powals’ and Weinschenk and Barker’s.

We can use such heuristics—or any other set you devise—to further evaluate each part of an application:
  • Search field, aesthetics (4)

  • Search field, error prevention (5)

  • Search field, copywriting (4)

  • Search field, performance (5)

  • Search field, HTML input type (5)

  • ----------------------------------

  • Search field (4.6)
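The same arithmetic rolls up one level: each part’s grade can be the average of its per-heuristic scores. The sketch below mirrors the bullet list above; the heuristic names are taken from the example, not from any standard API:

```python
# Per-heuristic scores for the search field, each on a 0-5 scale.
search_field = {
    "aesthetics": 4,
    "error_prevention": 5,
    "copywriting": 4,
    "performance": 5,
    "html_input_type": 5,
}

# The part's grade is the average of its per-heuristic scores.
part_score = sum(search_field.values()) / len(search_field)
print(part_score)  # (4 + 5 + 4 + 5 + 5) / 5 -> 4.6
```

Averaging page parts into a page score, and heuristic scores into a part score, uses the identical calculation, so one helper function could serve both levels.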

Once you apply a heuristic across your application, you can see what works and what does not: what needs your attention and what can wait. Perhaps you will choose to tackle parts graded below three, or fix everything marked as a “fail.”
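Triaging by grade amounts to a simple filter. In this sketch, the sample scores and the cutoff of three are assumptions chosen for illustration; any threshold that fits your scale works the same way:

```python
# Page-part grades on a 0-5 scale, plus pass/fail grades as booleans.
numeric_scores = {"search_field": 5, "hero_image": 2, "body_copy": 4}
pass_fail = {"search_field": True, "hero_image": False, "body_copy": True}

# Flag numeric parts graded below three (an assumed cutoff), plus any part marked as a fail.
needs_attention = sorted(
    {part for part, score in numeric_scores.items() if score < 3}
    | {part for part, passed in pass_fail.items() if not passed}
)
print(needs_attention)  # -> ['hero_image']
```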

Evaluate each part, make improvements, and your guests will return for second helpings.

Key Takeaways

  • Heuristic scoring uses numeric or Boolean values to evaluate the fitness of an experience.

  • Several heuristic frameworks already exist, such as Nielsen’s, Gerhardt-Powals’, and Weinschenk and Barker’s.

Questions to Ask Yourself

  • What parts of an experience do I wish to evaluate?

  • Should I create my own heuristics or leverage an existing framework?

  • What is the most appropriate scoring method for the evaluation—pass/fail, number range, or percentage?

  • How can I pair the heuristic evaluation with user testing and other research activities?
